The goal of multimodal abstractive summarization (MAS) is to produce a concise summary given multimodal data (text and vision). Existing studies on MAS mainly focus on how to effectively use the extracted visual features, and have achieved impressive success on high-resource English datasets. However, less attention has been paid to how relevant the visual features are to the summary, which may limit model performance, especially in low- and zero-resource scenarios. In this paper, we propose to improve summary quality through summary-oriented visual features. To this end, we devise two auxiliary tasks: a \emph{vision-to-summary task} and a \emph{masked image modeling task}. Together with the main summarization task, we optimize the MAS model via the training objectives of all these tasks. In this way, the MAS model is enhanced to capture summary-oriented visual features, thereby yielding more accurate summaries. Experiments on 44 languages, covering mid-high-, low-, and zero-resource scenarios, verify the effectiveness and superiority of the proposed approach, which achieves state-of-the-art performance under all scenarios.
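The joint training described above can be read as a multi-task objective. A minimal sketch, assuming a simple weighted sum with hyper-parameters \(\lambda_1, \lambda_2\) (the paper's exact weighting scheme may differ):
\[
\mathcal{L} = \mathcal{L}_{\text{sum}} + \lambda_1 \mathcal{L}_{\text{v2s}} + \lambda_2 \mathcal{L}_{\text{MIM}},
\]
where \(\mathcal{L}_{\text{sum}}\) is the loss of the main summarization task, \(\mathcal{L}_{\text{v2s}}\) that of the vision-to-summary task, and \(\mathcal{L}_{\text{MIM}}\) that of the masked image modeling task.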
Cross-Lingual Summarization (CLS) aims at generating summaries in one language for given documents in another language. CLS has attracted wide research attention due to its practical significance in the multi-lingual world. Though great progress has been made, existing CLS works typically focus on short documents, such as news articles, short dialogues, and guides. Different from these short texts, long documents such as academic articles and business reports usually discuss complicated subjects and consist of thousands of words, making them non-trivial to process and summarize. To promote CLS research on long documents, we construct Perseus, the first long-document CLS dataset, which contains about 94K Chinese scientific documents paired with English summaries. The average length of documents in Perseus is more than two thousand tokens. As a preliminary study on long-document CLS, we build and evaluate various CLS baselines, including pipeline and end-to-end methods. Experimental results on Perseus show the superiority of the end-to-end baseline, which outperforms strong pipeline models equipped with sophisticated machine translation systems. Furthermore, to provide a deeper understanding, we manually analyze the model outputs and discuss specific challenges faced by current approaches. We hope that our work can serve as a benchmark for long-document CLS and benefit future studies.
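For context on the pipeline baselines mentioned above, here is a minimal translate-then-summarize sketch using Hugging Face transformers. The checkpoints are generic public models chosen purely for illustration, not the systems used in the paper, and real scientific documents would need chunking or a long-input model.

```python
from transformers import pipeline

# Pipeline baseline: machine-translate the Chinese document, then summarize in English.
# These checkpoints are illustrative placeholders, not the paper's pipeline components.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def translate_then_summarize(zh_document: str) -> str:
    # NOTE: long documents exceed these models' input limits, so a practical
    # system would chunk the text or use a long-input architecture.
    en_document = translator(zh_document, max_length=512)[0]["translation_text"]
    summary = summarizer(en_document, max_length=200, min_length=50)[0]["summary_text"]
    return summary
```

An end-to-end baseline would instead fine-tune a single multilingual sequence-to-sequence model directly on (Chinese document, English summary) pairs, avoiding error propagation from the translation step.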
Human pose estimation (HPE) based on RGB images has developed rapidly, benefiting from deep learning. However, event-based HPE has not been fully studied, despite its great potential for applications in extreme scenes and efficiency-critical conditions. In this paper, we are the first to estimate 2D human poses directly from 3D event point clouds. We propose a novel event representation, the rasterized event point cloud, which aggregates events at the same position within a time slice. It preserves 3D features from multiple statistical cues while significantly reducing memory consumption and computational complexity, which proves effective in our work. We then leverage different backbones, namely PointNet, DGCNN, and Point Transformer, with two-linear-layer decoders to predict the locations of human keypoints. We find that, with our method, PointNet achieves promising results at a faster speed, whereas Point Transformer reaches higher accuracy, even close to previous event-frame-based methods. A comprehensive set of results demonstrates that our proposed method is consistently effective across these 3D backbone models for event-driven human pose estimation. Our PointNet-based method with 2048-point inputs achieves 82.46mm in MPJPE3D on the DHP19 dataset, with a latency of only 12.29ms on an NVIDIA Jetson Xavier NX edge computing platform, making it ideally suited for real-time detection with event cameras. The code will be made publicly available at https://github.com/masterhow/eventpointpose.
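To illustrate the representation described above, here is a minimal NumPy sketch of rasterizing an event stream into per-time-slice point clouds. The (x, y, t, p) event layout, the particular per-pixel statistics, and the fixed point count are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def rasterize_events(events: np.ndarray, num_slices: int = 4, num_points: int = 2048) -> np.ndarray:
    """Aggregate events sharing a pixel within each time slice into one point.

    events: (N, 4) array of (x, y, t, p) with polarity p in {-1, +1}.
    Returns (num_slices, num_points, 4) points of (x, y, event_count, mean_polarity).
    The chosen statistics are illustrative; any per-pixel cues could be kept.
    """
    x, y, t, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
    t_min, t_max = t.min(), t.max()
    slice_idx = np.minimum(
        ((t - t_min) / (t_max - t_min + 1e-9) * num_slices).astype(int), num_slices - 1
    )

    clouds = np.zeros((num_slices, num_points, 4), dtype=np.float32)
    for s in range(num_slices):
        mask = slice_idx == s
        if not mask.any():
            continue
        # Group this slice's events by pixel location and keep per-pixel statistics.
        keys = np.stack([x[mask], y[mask]], axis=1).astype(np.int64)
        uniq, inv = np.unique(keys, axis=0, return_inverse=True)
        inv = inv.ravel()
        count = np.bincount(inv).astype(np.float32)
        mean_pol = np.bincount(inv, weights=p[mask]) / count
        points = np.concatenate(
            [uniq.astype(np.float32), count[:, None], mean_pol[:, None]], axis=1
        )
        # Subsample (or zero-pad) to a fixed number of points for the backbone.
        take = min(num_points, len(points))
        sel = np.random.choice(len(points), take, replace=False)
        clouds[s, :take] = points[sel]
    return clouds
```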
Sports game summarization aims to generate sports news from live commentaries. However, existing datasets are all constructed through automatic collection and cleaning processes, which introduces a great deal of noise. Moreover, current works neglect the knowledge gap between live commentaries and sports news, which limits the performance of sports game summarization. In this paper, we introduce K-SportsSum, a new dataset with two features: (1) K-SportsSum collects a large amount of data from large-scale games, containing 7,854 commentary-news pairs, and adopts a manual cleaning process to improve quality; (2) unlike existing datasets, to narrow the knowledge gap, K-SportsSum further provides a large-scale knowledge corpus containing information about 523 sports teams and 14,724 sports players. In addition, we introduce a knowledge-enhanced summarizer that utilizes both live commentaries and knowledge to generate sports news. Extensive experiments on the K-SportsSum and SportsSum datasets show that our model achieves new state-of-the-art performance. Qualitative analysis and a human study further verify that our model generates more informative sports news.
Given a document in a source language, cross-lingual summarization (CLS) aims at generating a concise summary in a different target language. Unlike in monolingual summarization (MS), naturally occurring source-language documents paired with target-language summaries are rare. To collect large-scale CLS samples, existing datasets typically involve translation in their creation. However, translated text differs from text originally written in that language, a phenomenon known as translationese. Though many efforts have been devoted to CLS, none of them has taken translationese into account. In this paper, we first confirm that different approaches to constructing CLS datasets lead to different degrees of translationese. We then design systematic experiments to investigate how translationese affects CLS model evaluation and performance when it appears in source documents or target summaries. In detail, we find that (1) the translationese in documents or summaries of test sets might lead to a discrepancy between human judgment and automatic evaluation; (2) the translationese in training sets would harm model performance in real-world scenarios; (3) though machine-translated documents involve translationese, they are very useful for building CLS systems for low-resource languages under specific training strategies. Furthermore, we give suggestions for future CLS research, including dataset and model development. We hope that our work will draw researchers' attention to the phenomenon of translationese in CLS so that it is taken into account in future work.
Deep learning methods have contributed substantially to the rapid advancement of medical image segmentation, the quality of which relies on the suitable design of loss functions. Popular loss functions, including the cross-entropy and Dice losses, often fall short in boundary detection, thereby limiting high-resolution downstream applications such as automated diagnoses and procedures. We develop a novel loss function tailored to reflect boundary information and thus enhance boundary detection. As the contrast between segmentation and background regions along the classification boundary naturally induces heterogeneity over the pixels, we propose the piece-wise two-sample t-test augmented (PTA) loss, which is infused with a statistical test for such heterogeneity. We demonstrate the improved boundary detection power of the PTA loss compared to benchmark losses without a t-test component.
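For reference, the standard (Welch) two-sample t-statistic underlying such a test-based loss is
\[
t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}},
\]
where \(\bar{x}_1, \bar{x}_2\), \(s_1^2, s_2^2\), and \(n_1, n_2\) are the sample means, variances, and sizes of the pixel values on the two sides of a boundary segment; a larger \(|t|\) indicates stronger heterogeneity across the predicted boundary. How the PTA loss pieces the boundary into segments and folds this statistic into the training objective is specified in the paper; the formula above is only the standard building block.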
Sports game summarization aims to generate sports news based on real-time commentaries. The task has attracted wide research attention but remains under-explored due to the lack of a corresponding English dataset. Therefore, in this paper, we release GOAL, the first English sports game summarization dataset. Specifically, GOAL contains 103 commentary-news pairs, where the average lengths of commentaries and news are 2724.9 and 476.3 words, respectively. Moreover, to support research in semi-supervised settings, GOAL additionally provides 2,160 unlabeled commentary documents. Based on GOAL, we build and evaluate several baselines, including extractive and abstractive methods. Experimental results show that the challenges of this task still remain. We hope our work can promote research on sports game summarization. The dataset has been released at https://github.com/krystalan/goal.
Grounding dialogue systems with external knowledge is a promising way to improve response quality. Most existing works adopt knowledge graphs (KGs) as the external resource, focusing on the contribution of entities in the last utterance of the dialogue to context understanding and response generation. Nevertheless, the correlations between the knowledge implied in the multi-turn context and the transition regularities between relations in KGs are under-explored. To this end, we propose a Relation Transition aware Knowledge-Grounded Dialogue generation model (RT-KGD). Specifically, inspired by the latent logic of human conversation, our model integrates dialogue-level relation transition regularities with turn-level entity semantic information. In this manner, the interaction between pieces of knowledge is considered to produce abundant clues for predicting appropriate knowledge and generating coherent responses. Experimental results on both automatic and manual evaluation indicate that our model outperforms state-of-the-art baselines.
Cross-lingual summarization is the task of generating a summary in one language (e.g., English) for a given document in a different language (e.g., Chinese). Under the background of globalization, this task has attracted increasing attention from the computational linguistics community. Nevertheless, there still lacks a comprehensive review of this task. Therefore, we present the first systematic and critical review of the datasets, approaches, and challenges in this field. Specifically, we carefully organize existing datasets and approaches according to different construction methods and solution paradigms, respectively. For each type of dataset or approach, we thoroughly introduce and summarize previous efforts and compare them with each other to provide deeper analyses. Finally, we also discuss promising directions and offer our thoughts to facilitate future research. This survey is intended for both beginners and experts in cross-lingual summarization, and we hope it will serve as a starting point as well as a source of new ideas for researchers and engineers interested in this area.
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
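As a rough illustration of the generative-replay idea described above, the sketch below shows a generic adaptation step in PyTorch: a frozen generator replays images resembling previously seen domains, the frozen previous segmentation model supervises the new model on those replayed images, and the unlabeled current-domain images contribute an entropy term. This is a hypothetical, simplified formulation for illustration only; GarDA's actual architecture and losses are defined in the paper, and all components here (generator interface, loss weights) are assumptions.

```python
import torch
import torch.nn.functional as F

def adaptation_step(new_model, old_model, generator, current_images, optimizer,
                    replay_batch=8, lam=1.0):
    """One generic generative-replay update (illustrative, not GarDA's exact method)."""
    old_model.eval()
    generator.eval()
    with torch.no_grad():
        # Replay: synthesize images resembling previously seen domains.
        z = torch.randn(replay_batch, generator.latent_dim)  # assumed generator interface
        replay_images = generator(z)
        # The frozen previous model provides soft targets on replayed images.
        old_logits = old_model(replay_images)

    # Distill the previous model's behavior on replayed images into the new model.
    new_logits_replay = new_model(replay_images)
    distill_loss = F.kl_div(
        F.log_softmax(new_logits_replay, dim=1),
        F.softmax(old_logits, dim=1),
        reduction="batchmean",
    )

    # Unlabeled current-domain images: encourage confident predictions.
    probs = F.softmax(new_model(current_images), dim=1)
    entropy_loss = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()

    loss = distill_loss + lam * entropy_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```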